
    The reentry hypothesis: The putative interaction of the frontal eye field, ventrolateral prefrontal cortex, and areas V4, IT for attention and eye movement

    Attention is known to play a key role in perception, including action selection, object recognition and memory. Despite findings revealing competitive interactions among cell populations, attention remains difficult to explain. The central purpose of this paper is to link a large number of findings in a single computational approach. Our simulation results suggest that attention can be well explained on a network level involving many areas of the brain. We argue that attention is an emergent phenomenon that arises from reentry and competitive interactions. We hypothesize that guided visual search requires the use of an object-specific template in prefrontal cortex to sensitize V4 and IT cells whose preferred stimuli match the target template. This induces a feature-specific bias and provides guidance for eye movements. Prior to an eye movement, a spatially organized reentry signal from oculomotor centers, specifically the movement cells of the frontal eye field, modulates the gain of V4 and IT cells. The processes involved are elucidated by quantitatively comparing the time course of simulated neural activity with experimental data. Using visual search tasks as an example, we provide clear and empirically testable predictions for the participation of IT, V4 and the frontal eye field in attention. Finally, we explain a possible physiological mechanism that can lead to non-flat search slopes as the result of a slow, parallel discrimination process.
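
    The two reentry signals described in this abstract can be sketched as multiplicative gains on a toy population of V4 cells. All sizes, preferred features and gain functions below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Toy sketch (illustrative values): a prefrontal target template adds a
# feature-specific gain, and FEF movement-cell reentry adds a spatial
# gain centered on the saccade goal. Neither function is from the paper.

positions = np.arange(8)                       # preferred locations of 8 V4 cells
features = np.array([0, 1, 2, 3, 0, 2, 1, 2])  # preferred features (0..3)
bottom_up = np.ones(8)                         # stimulus-driven input

target_feature = 2    # template held in ventrolateral prefrontal cortex
saccade_goal = 5      # center of the FEF movement field

feature_gain = np.where(features == target_feature, 1.5, 1.0)
spatial_gain = 1.0 + np.exp(-0.5 * (positions - saccade_goal) ** 2)

response = bottom_up * feature_gain * spatial_gain
print(int(np.argmax(response)))  # → 5: the cell matching template AND saccade goal
```

    The cell whose preferred feature matches the template and whose location matches the planned saccade receives both gains, which is the essence of the hypothesized feature-specific plus spatial bias.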

    The reentry hypothesis: linking eye movements to visual perception

    Cortical organization of vision appears to be divided into perception and action. Models of vision have generally assumed that eye movements serve to select a scene for perception, so that action and perception are sequential processes. We suggest a less distinct separation. According to our model, oculomotor areas responsible for planning an eye movement, such as the frontal eye field, influence perception prior to the eye movement. The activity reflecting the planning of an eye movement reenters the ventral pathway and sensitizes all cells within the movement field, so that the planned action determines perception. We demonstrate the performance of the computational model in a visual search task that demands an eye movement toward a target.

    Attentional selection of noncontiguous locations: The spotlight is only transiently “split”

    It is still a matter of debate whether observers can attend simultaneously to more than one location. Using essentially the same paradigm as was used previously by N. P. Bichot, K. R. Cave, and H. Pashler (1999), we demonstrate that their finding of an attentional “split” between separate target locations only reflects the early phase of attentional selection. Our subjects were asked to compare the shapes (circle or square) of 2 oddly colored targets within an array of 8 stimuli. After a varying stimulus onset asynchrony (SOA), 8 letters were flashed at the previous stimulus locations, followed by a mask. For a given SOA, the performance of subjects at reporting letters in each location was taken to reflect the distribution of spatial attention. In particular, by considering the proportion of trials in which none or both of the target letters were reported, we were able to infer the respective amount of attention allocated to each target without knowing, on a trial-by-trial basis, which location (if any) was receiving the most attentional resources. Our results show that for SOAs under 100–150 ms, attention can be equally split between the two targets, a conclusion compatible with previous reports. However, with longer SOAs, this attentional division can no longer be sustained and attention ultimately settles at the location of one single stimulus.

    Computational models of basal-ganglia pathway functions: focus on functional neuroanatomy

    Over the past 15 years, computational models have had a considerable impact on basal-ganglia research. Most of these models implement multiple distinct basal-ganglia pathways and assume them to fulfill different functions. As there is now a multitude of different models, it has become difficult to keep track of their various, sometimes only marginally different, assumptions about pathway functions. Moreover, it has become a challenge to assess to what extent individual assumptions are corroborated or challenged by empirical data. Focusing on computational, but also considering non-computational, models, we review influential concepts of pathway functions and show to what extent they are compatible with or contradict each other. Moreover, we outline how empirical evidence favors or challenges specific model assumptions and propose experiments that allow testing these assumptions against each other.

    Spatial Synaptic Growth and Removal for Learning Individual Receptive Field Structures

    One challenge in creating neural models of the visual system is the appropriate definition of the connectivity: the modeler's definition constrains the results. Unfortunately, sufficient information about connection sizes is often unavailable, e.g. for deeper layers or for different neuron types such as interneurons. Hence, a mechanism that refines the connection structure based on what has been learned would be desirable. Such a mechanism can be found in the human brain in the form of structural plasticity, that is, the formation and removal of synapses. For our model, we exploit the facts that synaptic connections are likely to be formed in the proximity of other synapses and that synapse removal is related to the volume of the spine forming the synapse. We implemented these mechanisms as probabilistic processes: the probability of synapse formation is determined by the strength of the neighboring synapses, and synapse removal depends on the weight strength. We demonstrate these mechanisms in a model of the visual areas V1 and V2. The model learns biologically plausible receptive fields while developing connection matrices that closely fit the learned receptive fields. We show that connections grow and retract with learning, and thus receptive fields are not restricted to their initial boundaries. Nevertheless, the initial retinotopic organization of the neurons is preserved. Testing the ability to overcome the modeler's bias by varying the size of the initial connection matrix shows that all versions develop similar receptive field sizes. Hence, we suggest structural plasticity as a suitable mechanism for learning diverse receptive field structures while overcoming the modeler's bias.
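
    The two probabilistic processes described in the abstract can be sketched on a single row of a connection matrix. The probability functions and constants below are made up for illustration; the paper does not specify them in this abstract.

```python
import numpy as np

# Hypothetical sketch: formation probability grows with the summed
# strength of neighboring synapses; removal probability shrinks with
# the synapse's own weight. Both functions are illustrative.

rng = np.random.default_rng(1)

w = np.array([0.0, 0.8, 0.0, 0.05, 0.9, 0.0])  # one row of a connection matrix

def neighbor_strength(w, i):
    """Summed weight of the immediately adjacent synapses."""
    left = w[i - 1] if i > 0 else 0.0
    right = w[i + 1] if i < len(w) - 1 else 0.0
    return left + right

for i in range(len(w)):
    if w[i] == 0.0:
        # potential synapse: more likely to form next to strong synapses
        p_form = 1.0 - np.exp(-2.0 * neighbor_strength(w, i))
        if rng.random() < p_form:
            w[i] = 0.01          # new synapse starts weak
    else:
        # existing synapse: weak weights are likely to be pruned
        p_remove = np.exp(-5.0 * w[i])
        if rng.random() < p_remove:
            w[i] = 0.0
```

    Because formation is biased toward the neighborhood of strong synapses and removal is biased toward weak ones, the connection structure drifts toward the receptive field that learning carves out, which is the effect the abstract describes.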

    Object Recognition and Visual Search with a Physiologically Grounded Model of Visual Attention

    Visual attention models can explain a rich set of physiological data (Reynolds & Heeger, 2009, Neuron), but can rarely link these findings to real-world tasks. Here, we would like to narrow this gap with a novel, physiologically grounded model of visual attention by demonstrating its object recognition abilities in noisy scenes. To base the model on physiological data, we used a recently developed microcircuit model of visual attention (Beuth & Hamker, in revision, Vision Res) which explains a large set of attention experiments, e.g. biased competition, modulation of contrast response functions, tuning curves, and surround suppression. Objects are represented by object-view-specific neurons, learned via a trace learning approach (Antonelli et al., 2014, IEEE TAMD). A visual cortex model combines the microcircuit with neuroanatomical properties like top-down attentional processing, hierarchically increasing receptive field sizes, and synaptic transmission delays. The visual cortex model is complemented by a model of the frontal eye field (Zirnsak et al., 2011, Eur J Neurosci). We evaluated the model on a realistic object recognition task in which a given target has to be localized in a scene (guided visual search task), using 100 different target objects, 1000 scenes, and two backgrounds. The model achieves an accuracy of 92% on black and 71% on white-noise backgrounds. We found that two of the underlying neuronal attention mechanisms are particularly relevant for guided visual search: amplification of neurons preferring the target, and suppression of neurons encoding distractors or background noise.

    A Computational Model of Basal Ganglia and its Role in Memory Retrieval in Rewarded Visual Memory Tasks

    Visual working memory (WM) tasks involve a network of cortical areas such as inferotemporal, medial temporal and prefrontal cortices. We suggest here to investigate the role of the basal ganglia (BG) in the learning of delayed rewarded tasks through the selective gating of thalamocortical loops. We designed a computational model of the visual loop linking the perirhinal cortex, the BG and the thalamus, biased by sustained representations in prefrontal cortex. This model learns concurrently different delayed rewarded tasks that require maintaining a visual cue and associating it to itself or to another visual object to obtain reward. The retrieval of visual information is achieved through thalamic stimulation of the perirhinal cortex. The input structure of the BG, the striatum, learns to represent visual information based on its association to reward, while the output structure, the substantia nigra pars reticulata, learns to link striatal representations to the disinhibition of the correct thalamocortical loop. In parallel, a dopaminergic cell learns to associate striatal representations to reward and modulates learning of connections within the BG. The model provides testable predictions about the behavior of several areas during such tasks, while providing a new functional organization of learning within the BG, putting emphasis on the learning of the striatonigral connections as well as the lateral connections within the substantia nigra pars reticulata. It suggests that the learning of visual WM tasks is achieved rapidly in the BG and used as a teacher for feedback connections from prefrontal cortex to posterior cortices.
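
    The dopamine-gated learning sketched in the abstract can be illustrated with a minimal three-factor update: a dopamine signal (reward minus a learned prediction) gates Hebbian learning of a corticostriatal weight. All constants and the single-weight setup are hypothetical, not the model's actual equations.

```python
# Illustrative three-factor rule: dopamine = reward - prediction gates
# the Hebbian update of a cortex -> striatum weight, while the
# dopaminergic cell's own prediction converges toward the reward.

alpha = 0.1   # corticostriatal learning rate
beta = 0.2    # learning rate of the reward prediction

w = 0.0       # cortex -> striatum weight
v = 0.0       # reward prediction of the dopaminergic cell

for trial in range(200):
    pre, post = 1.0, 1.0       # active cue and striatal cell
    reward = 1.0               # rewarded task
    dopamine = reward - v      # prediction error
    w += alpha * dopamine * pre * post
    v += beta * (reward - v)

print(round(w, 3), round(v, 3))  # → 0.5 1.0
```

    As the prediction approaches the reward, dopamine vanishes and the weight stops changing, so learning is fast early on and self-limiting, consistent with the rapid BG learning the abstract emphasizes.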

    Predictions of a model of spatial attention using sum- and max-pooling functions

    Assuming a convergent projection within a hierarchy of processing stages, stimuli from different areas of the receptive field project onto the same population of cells. Pooling over space affects the representation of individual stimuli, and thus its understanding is crucial for attention and ultimately for object recognition. Since attention, in turn, is likely to modify such spatial pooling by changing the competitive weight of individual stimuli, we compare the predictions of sum- and max-pooling methods using a model of attention. Both pooling functions can account for data investigating the competition between a pair of stimuli within a V4 receptive field; however, our model using sum-pooling predicts a different tuning curve. If we present an additional probe stimulus with the pair, sum-pooling predicts a bottom-up bias of attention, whereas the competition for attention using max-pooling is robust against the additional stimulus.
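
    The qualitative difference between the two pooling functions can be seen with made-up afferent responses: adding a weak probe changes the summed response but leaves the max response untouched.

```python
import numpy as np

# Toy numbers (not from the paper): responses of afferents to a
# stimulus pair, then the same pair plus a weaker probe stimulus.

pair = np.array([0.9, 0.6])
pair_with_probe = np.array([0.9, 0.6, 0.4])

# Sum-pooling: every extra stimulus shifts the pooled response,
# which is why it predicts a bottom-up bias of attention.
print(pair.sum(), pair_with_probe.sum())

# Max-pooling: any stimulus weaker than the current winner is
# invisible to the pooled response, hence robustness to the probe.
print(pair.max(), pair_with_probe.max())
```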

    Learning Object Representations for Modeling Attention in Real World Scenes

    Models of visual attention have rarely been used in real-world tasks, as they have typically been developed for psychophysical setups using simple stimuli. Thus, the question remains how objects must be represented to allow such models to operate in real-world scenarios. We have previously presented an attention model capable of operating on real-world scenes (Beuth, F., and Hamker, F. H., 2015, NCNC; a successor of Hamker, F. H., 2005, Cerebral Cortex), and show here how its object representations have been learned. We used a learning rule based on temporal continuity (Földiák, P., 1991, Neural Computation) to ensure biological plausibility. Yet, temporal continuity learning rules have not previously been used in a real-world context; thus, we introduced an improvement: we increased the postsynaptic threshold to make the learning more specific, resulting in object-encoding cells that respond mainly to their preferred objects. Furthermore, we present a novelty relative to Beuth, F., and Hamker, F. H., 2015: the learning of object representations invariant to the background. It is currently unknown how such representations are learned by the human brain. Suggestions have been made to use disparity or motion, whereas we propose temporal continuity learning. This principle learns connections from presynaptic features that are stable over time. As the object changes much less than the background over time, strong connections are primarily learned to the object and hardly any to the background. Such learned representations allow the attention model to identify and locate objects in real-world scenes.
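
    A trace rule in the spirit of Földiák (1991) can be sketched as follows: the postsynaptic activity is low-pass filtered over time, so weights grow toward inputs that are stable across frames (the object) while rapidly changing inputs (the background) average out. The input layout, parameter values, and the exact update form are illustrative assumptions.

```python
import numpy as np

# Trace learning sketch: two input channels carry a stable object
# signal, two carry background noise that changes every frame.

alpha = 0.05   # learning rate
delta = 0.2    # trace update rate

rng = np.random.default_rng(0)
w = np.zeros(4)
trace = 0.0

object_input = np.array([1.0, 1.0, 0.0, 0.0])   # stable over time

for t in range(500):
    background = rng.random(4) * np.array([0.0, 0.0, 1.0, 1.0])
    x = object_input + background
    y = 1.0                               # the cell responds to its object
    trace = (1.0 - delta) * trace + delta * y
    w += alpha * trace * (x - w)          # trace-modulated Hebb with decay

print(w.round(2))  # object channels dominate the learned weights
```

    The stable object channels converge to their full input strength, while the background channels only reach the average of their fluctuating input, giving the background-invariant representations described above.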

    ANNarchy: a code generation approach to neural simulations on parallel hardware

    Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that efficiently performs the simulation on the chosen parallel hardware (multi-core system or graphics processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.
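
    To illustrate what an equation-oriented description boils down to, the sketch below hand-writes the kind of loop such a simulator would generate from a rate-coded neuron definition like tau * dr/dt + r = sum(exc), integrated with the explicit Euler method. The network size, weights, and time step are illustrative, and the code is plain Python rather than ANNarchy's actual generated C++.

```python
import numpy as np

tau = 10.0   # membrane time constant (ms)
dt = 0.1     # integration step (ms)

w = np.array([[0.5, 0.5]])   # 2 input neurons -> 1 output neuron
inp = np.array([1.0, 1.0])   # constant presynaptic firing rates
r = np.zeros(1)              # output firing rate

for _ in range(int(100.0 / dt)):   # simulate 100 ms
    drive = w @ inp                # weighted sum of excitatory input
    r += dt / tau * (drive - r)    # Euler step of tau*dr/dt = -r + drive

print(r.round(3))   # after 10 time constants, r has converged to drive
```

    A code generator performs exactly this transformation, ODE to update loop, but emits compiled C++ with the population loop parallelized, which is where the performance gains over interpreted simulation come from.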